We initiate the study of proper losses for evaluating generative models in the discrete setting. Unlike traditional proper losses, we treat both the generative model and the target distribution as black boxes, assuming only the ability to draw i.i.d. samples. We define a loss to be black-box proper if the generative distribution that minimizes expected loss is equal to the target distribution. Using techniques from statistical estimation theory, we give a general construction and characterization of black-box proper losses: they must take a polynomial form, and the number of draws from the model and target distribution must exceed the degree of the polynomial. The characterization rules out a loss whose expectation is the cross-entropy between the target distribution and the model. By extending the construction to arbitrary sampling schemes such as Poisson sampling, however, we show that one can construct such a loss.
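As a toy illustration of a polynomial-form, black-box proper loss (a hypothetical example in the spirit of the characterization, not the paper's general construction), the following degree-2 loss uses two i.i.d. draws from the model $p$ and two from the target $q$; its expectation is $\|p - q\|_2^2$, which is minimized exactly when the model matches the target:

```python
from itertools import product

def poly_loss(p_draws, q_draws):
    """Degree-2 polynomial loss evaluated on two i.i.d. draws from the
    model p and two from the target q (draws are category indices).
    Hypothetical illustration, not the paper's construction."""
    p1, p2 = p_draws
    q1, q2 = q_draws
    # Collision indicators; booleans sum as 0/1.
    return (p1 == p2) + (q1 == q2) - (p1 == q1) - (p2 == q2)

def expected_loss(p, q):
    """Exact expectation of poly_loss under independent draws,
    computed by enumerating all outcomes.  Equals sum_x (p_x - q_x)^2."""
    total = 0.0
    for p1, p2, q1, q2 in product(range(len(p)), repeat=4):
        weight = p[p1] * p[p2] * q[q1] * q[q2]
        total += weight * poly_loss((p1, p2), (q1, q2))
    return total
```

Because the expectation is nonnegative and zero only when $p = q$, the loss is black-box proper in the sense defined above; the precise relationship between sample counts and polynomial degree is as characterized in the abstract.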
Top-$k$ classification is a generalization of multiclass classification widely used in information retrieval, image classification, and other extreme classification settings. Several hinge-like (piecewise-linear) surrogates have been proposed, yet all of them are either non-convex or inconsistent. For the proposed surrogates that are convex (hence polyhedral), we apply the recent embedding framework of Finocchiaro et al. (2019; 2022) to determine the prediction problems for which each surrogate is consistent. These problems can all be interpreted as variants of top-$k$ classification, which may better match some applications. We leverage this analysis to derive constraints on the conditional label distributions under which these proposed surrogates become consistent for top-$k$. It has further been suggested that every convex hinge-like surrogate must be inconsistent for top-$k$; using the same embedding framework, however, we give the first consistent polyhedral surrogate for this problem.
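To make the objects concrete, here is a minimal sketch of the top-$k$ zero-one loss together with one hinge-like (piecewise-linear) surrogate; both functions are hypothetical illustrations of the loss family discussed, not a specific proposal from the literature:

```python
def top_k_loss(scores, y, k):
    """Zero-one top-k loss: 1 if the true label y is not among the k
    highest-scoring classes (ties broken by class index)."""
    ranked = sorted(range(len(scores)), key=lambda i: (-scores[i], i))
    return 0 if y in ranked[:k] else 1

def hinge_top_k(scores, y, k):
    """A hinge-like surrogate: penalizes the margin by which the k-th
    largest non-true score approaches the true-class score."""
    others = sorted((s for i, s in enumerate(scores) if i != y), reverse=True)
    return max(0.0, 1.0 + others[k - 1] - scores[y])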
We formalize and study the natural approach of designing convex surrogate loss functions via embeddings, for problems such as classification, ranking, or structured prediction. In this approach, one embeds each of the finitely many predictions (e.g. rankings) as a point in $\mathbb{R}^d$, assigns the original loss values to these points, and "convexifies" the loss in some way to obtain a surrogate. We establish a strong connection between this approach and polyhedral (piecewise-linear convex) surrogate losses: every discrete loss is embedded by some polyhedral loss, and every polyhedral loss embeds some discrete loss. Moreover, an embedding gives rise to a consistent link function as well as linear surrogate regret bounds. Our results are constructive, as we illustrate with several examples. In particular, our framework gives succinct proofs of consistency or inconsistency for various polyhedral surrogates in the literature, and for inconsistent surrogates, it further reveals the discrete losses for which these surrogates are consistent. We go on to show additional structural properties of embeddings, such as the equivalence of embedding and matching Bayes risks, and the equivalence of various notions of non-degeneracy. Using these results, we establish that indirect elicitation, a necessary condition for consistency, is also sufficient when working with polyhedral surrogates.
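A classical instance of this connection (a sketch for intuition, not the paper's general construction): the hinge loss is polyhedral, its values at the embedded points $\pm 1$ realize twice the 0-1 loss, and the sign function serves as the consistent link mapping surrogate reports back to discrete predictions.

```python
def zero_one(prediction, y):
    """Discrete 0-1 loss for labels and predictions in {-1, +1}."""
    return 0 if prediction == y else 1

def hinge(u, y):
    """Polyhedral (piecewise-linear convex) surrogate on reports u in R."""
    return max(0.0, 1.0 - u * y)

def link(u):
    """Link function: maps a surrogate report to a discrete prediction."""
    return 1 if u >= 0 else -1
```

At the embedded points $u \in \{-1, +1\}$, `hinge(u, y)` equals `2 * zero_one(u, y)`, i.e. the hinge loss embeds (a scaling of) the 0-1 loss.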
We formalize and study the natural approach of designing convex surrogate loss functions via embeddings, for problems such as classification, ranking, or structured prediction. In this approach, one embeds each of the finitely many predictions (e.g. rankings) as a point in $\mathbb{R}^d$, assigns the original loss values to these points, and "convexifies" the loss in some way to obtain a surrogate. We establish a strong connection between this approach and polyhedral (piecewise-linear convex) surrogate losses. Given any polyhedral loss $L$, we give a construction of a link function through which $L$ is a consistent surrogate for the loss it embeds. Conversely, we show how to construct a consistent polyhedral surrogate for any given discrete loss. Our framework yields succinct proofs of consistency or inconsistency for various polyhedral surrogates in the literature, and for inconsistent surrogates, it further reveals the discrete losses for which these surrogates are consistent. We show some additional structural properties of embeddings, such as the equivalence of embedding and matching Bayes risks, and the equivalence of various notions of non-degeneracy. Using these results, we establish that indirect elicitation, a necessary condition for consistency, is also sufficient when working with polyhedral surrogates.
Applying deep learning concepts from image detection and graph theory has greatly advanced protein-ligand binding affinity prediction, a challenge with enormous ramifications for both drug discovery and protein engineering. We build upon these advances by designing a novel deep learning architecture consisting of a 3-dimensional convolutional neural network utilizing channel-wise attention and two graph convolutional networks utilizing attention-based aggregation of node features. HAC-Net (Hybrid Attention-Based Convolutional Neural Network) obtains state-of-the-art results on the PDBbind v.2016 core set, the most widely recognized benchmark in the field. We extensively assess the generalizability of our model using multiple train-test splits, each of which maximizes differences between either protein structures, protein sequences, or ligand extended-connectivity fingerprints. Furthermore, we perform 10-fold cross-validation with a similarity cutoff between SMILES strings of ligands in the training and test sets, and also evaluate the performance of HAC-Net on lower-quality data. We envision that this model can be extended to a broad range of supervised learning problems related to structure-based biomolecular property prediction. All of our software is available as open source at https://github.com/gregory-kyro/HAC-Net/.
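For readers unfamiliar with channel-wise attention, the idea can be sketched in a few lines of NumPy in the style of squeeze-and-excitation gating: globally pool each channel, pass the result through a small bottleneck, and rescale the channels by sigmoid gates. This is a generic illustration with hypothetical weight shapes, not the actual HAC-Net module:

```python
import numpy as np

def channel_attention(feature_map, w1, w2):
    """Squeeze-and-excitation-style channel attention on a 3D feature
    map of shape (C, D, H, W).  w1: (hidden, C), w2: (C, hidden).
    Generic sketch, not the HAC-Net implementation."""
    c = feature_map.shape[0]
    squeeze = feature_map.reshape(c, -1).mean(axis=1)   # global average pool -> (C,)
    hidden = np.maximum(0.0, w1 @ squeeze)              # bottleneck + ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))         # per-channel gates in (0, 1)
    return feature_map * gate[:, None, None, None]      # rescale channels
```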
Dataset scaling, also known as normalization, is an essential preprocessing step in a machine learning pipeline. It is aimed at adjusting attribute scales so that they all vary within the same range. This transformation is known to improve the performance of classification models, but there are several scaling techniques to choose from, and this choice is generally not made carefully. In this paper, we execute a broad experiment comparing the impact of 5 scaling techniques on the performance of 20 classification algorithms, spanning monolithic and ensemble models, applied to 82 publicly available datasets with varying imbalance ratios. Results show that the choice of scaling technique matters for classification performance, and the performance difference between the best and the worst scaling technique is relevant and statistically significant in most cases. They also indicate that choosing an inadequate technique can be more detrimental to classification performance than not scaling the data at all. We also show how the performance variation of an ensemble model, considering different scaling techniques, tends to be dictated by that of its base model. Finally, we discuss the relationship between a model's sensitivity to the choice of scaling technique and its performance and provide insights into its applicability on different model deployment scenarios. Full results and source code for the experiments in this paper are available in a GitHub repository.\footnote{https://github.com/amorimlb/scaling\_matters}
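Two of the most common scaling techniques can be sketched as follows (a minimal NumPy illustration of the kind of techniques compared; the paper evaluates 5 such techniques):

```python
import numpy as np

def min_max(x):
    """Rescale each column of x to the range [0, 1]."""
    lo, hi = x.min(axis=0), x.max(axis=0)
    return (x - lo) / (hi - lo)

def standardize(x):
    """Center each column to zero mean and unit standard deviation."""
    return (x - x.mean(axis=0)) / x.std(axis=0)
```

After either transformation, attributes with wildly different original ranges become directly comparable, which is the property the abstract refers to.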
Human motion prediction is a complex task as it involves forecasting variables over time on a graph of connected sensors. This is especially true in the case of few-shot learning, where we strive to forecast motion sequences for previously unseen actions based on only a few examples. Despite this, almost all related approaches for few-shot motion prediction do not incorporate the underlying graph, even though it is a common component in classical motion prediction. Furthermore, state-of-the-art methods for few-shot motion prediction are restricted to motion tasks with a fixed output space, meaning these tasks are all limited to the same sensor graph. In this work, we extend recent work on few-shot time-series forecasting over heterogeneous attributes using graph neural networks, introducing the first few-shot motion prediction approach that explicitly incorporates the spatial graph while also generalizing across motion tasks with heterogeneous sensors. In our experiments on motion tasks with heterogeneous sensors, we demonstrate significant performance improvements, with lifts from 10.4% up to 39.3% compared to the best state-of-the-art models. Moreover, we show that our model can perform on par with the best approach so far when evaluated on tasks with a fixed output space, while using two orders of magnitude fewer parameters.
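The "incorporating the underlying graph" ingredient can be sketched as a single graph-convolution step over the sensor graph: average each node's neighborhood (including itself) and apply a shared linear map. This is a generic GCN-style sketch for intuition, not the authors' architecture:

```python
import numpy as np

def gcn_step(adj, h, w):
    """One graph-convolution step on node features h (nodes x features)
    over adjacency matrix adj, with shared weight matrix w."""
    a_hat = adj + np.eye(adj.shape[0])                  # add self-loops
    a_norm = a_hat / a_hat.sum(axis=1, keepdims=True)   # row-normalize
    return np.maximum(0.0, a_norm @ h @ w)              # aggregate + ReLU
```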
Video segmentation consists of a frame-by-frame selection process of meaningful areas related to foreground moving objects. Some applications include traffic monitoring, human tracking, action recognition, efficient video surveillance, and anomaly detection. In these applications, it is not rare to face challenges such as abrupt changes in weather conditions, illumination issues, shadows, subtle dynamic background motions, and also camouflage effects. In this work, we address such challenges by proposing a novel deep learning video segmentation approach that incorporates residual information into the foreground detection learning process. The main goal is to provide a method capable of generating an accurate foreground detection given a grayscale video. Experiments conducted on the Change Detection 2014 dataset and on the private dataset PetrobrasROUTES from Petrobras support the effectiveness of the proposed approach relative to several state-of-the-art video segmentation techniques, with overall F-measures of $\mathbf{0.9535}$ and $\mathbf{0.9636}$ on the Change Detection 2014 and PetrobrasROUTES datasets, respectively. Such a result places the proposed technique amongst the top 3 state-of-the-art video segmentation methods, besides comprising approximately seven times fewer parameters than its top-ranked counterpart.
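For context, the classical baseline that deep models like this one aim to improve upon is simple background subtraction: estimate a static background (e.g. the per-pixel median over time) and threshold the deviation of each frame from it. A minimal sketch (a classical baseline, not the deep residual approach above):

```python
import numpy as np

def foreground_mask(frames, threshold=25.0):
    """Median-background subtraction on a grayscale video of shape
    (T, H, W).  Returns a boolean foreground mask per frame."""
    frames = np.asarray(frames, dtype=float)
    background = np.median(frames, axis=0)        # static background estimate
    return np.abs(frames - background) > threshold
```

Such baselines fail under the challenges listed above (illumination changes, dynamic backgrounds, camouflage), which motivates the learned approach.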
Scene change detection is an image processing problem related to partitioning pixels of a digital image into foreground and background regions. Visual knowledge-based computer intelligent systems, like traffic monitoring, video surveillance, and anomaly detection, commonly rely on change detection techniques. Amongst the most prominent detection methods are the learning-based ones, which, besides sharing similar training and testing protocols, differ from each other in terms of their architecture design strategies. Such architecture design directly impacts the quality of the detection results, as well as the device resource requirements, such as memory. In this work, we propose a novel Multiscale Cascade Residual Convolutional Neural Network that integrates a multiscale processing strategy, through a Residual Processing Module, with a Segmentation Convolutional Neural Network. Experiments conducted on two different datasets support the effectiveness of the proposed approach, achieving average overall $\boldsymbol{F\text{-}measure}$ results of $\boldsymbol{0.9622}$ and $\boldsymbol{0.9664}$ on the Change Detection 2014 and PetrobrasROUTES datasets respectively, besides comprising approximately eight times fewer parameters. The obtained results place the proposed technique amongst the top four state-of-the-art scene change detection methods.
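The F-measure reported above is the harmonic mean of precision and recall over the predicted change mask versus the ground truth. A minimal sketch of the metric on flattened binary masks (the standard definition; dataset-specific evaluation protocols may differ in detail):

```python
def f_measure(pred, gt):
    """F-measure between a flattened predicted binary mask and ground
    truth (iterables of 0/1 values)."""
    tp = sum(1 for p, g in zip(pred, gt) if p and g)
    fp = sum(1 for p, g in zip(pred, gt) if p and not g)
    fn = sum(1 for p, g in zip(pred, gt) if g and not p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```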
Research on remote sensing image classification significantly impacts essential human routine tasks such as urban planning and agriculture. Nowadays, the rapid advance in technology and the availability of many high-quality remote sensing images create a demand for reliable automation methods. The current paper proposes two novel deep learning-based architectures for image classification purposes, i.e., the Discriminant Deep Image Prior Network and the Discriminant Deep Image Prior Network+, which combine Deep Image Prior and Triplet Networks learning strategies. Experiments conducted over three well-known public remote sensing image datasets achieved state-of-the-art results, evidencing the effectiveness of using deep image priors for remote sensing image classification.
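The Triplet Network learning strategy mentioned above trains an embedding so that an anchor image lies closer to a same-class (positive) image than to a different-class (negative) one by some margin. A minimal sketch of the standard triplet margin loss (the generic formulation, not the paper's full architecture):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet margin loss on embedding vectors: zero once the anchor is
    closer to the positive than to the negative by at least the margin."""
    d_pos = np.linalg.norm(anchor - positive)   # anchor-positive distance
    d_neg = np.linalg.norm(anchor - negative)   # anchor-negative distance
    return max(0.0, d_pos - d_neg + margin)
```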